Mohammad Rostami

Semi-Supervised Masked Autoencoders: Unlocking Vision Transformer Potential with Limited Data

Jan 27, 2026

Finetune-Informed Pretraining Boosts Downstream Performance

Jan 27, 2026

CageDroneRF: A Large-Scale RF Benchmark and Toolkit for Drone Perception

Jan 06, 2026

TNG-CLIP: Training-Time Negation Data Generation for Negation Awareness of CLIP

May 24, 2025

Multi-modal Synthetic Data Training and Model Collapse: Insights from VLMs and Diffusion Models

May 10, 2025

Plug-and-Play AMC: Context Is King in Training-Free, Open-Set Modulation with LLMs

May 06, 2025

Hybrid Learners Do Not Forget: A Brain-Inspired Neuro-Symbolic Approach to Continual Learning

Mar 16, 2025

DenoMAE2.0: Improving Denoising Masked Autoencoders by Classifying Local Patches

Feb 25, 2025

Cross-domain Few-shot Object Detection with Multi-modal Textual Enrichment

Feb 23, 2025

DenoMAE: A Multimodal Autoencoder for Denoising Modulation Signals

Jan 20, 2025